A public forum for people to chat about getting into the marketplace
Once a marketplace item has been flagged "has not been reviewed by the PRB", does the developer need to resubmit the item if they want the flag removed? You said the developer would hit a checkbox to have the item published. Will there be a "Don't publish this, because I still want it reviewed" box as well? Something to get the attention of the admins, or at least dump it back into the queue?
Is there a window after which items that don't pass the linter will automatically be rejected and removed from the queue?
I am afraid "has not been approved by the PRB" might sound like "has issues" to many buyers.
Should add-ons with low ratings go through the PRB / linter again, to clear them out of the approved market listing? A few of the eCommerce add-ons were approved at some point but clearly fail to work (one even had a note added by the Core Team saying the developer isn't responding and the add-on clearly doesn't work).
As I am already a PRB member, will I have to re-apply to be in it?
http://www.concrete5.org/about/blog/open-source-and-strategy/prb-re...
http://www.concrete5.org/marketplace/linter...
I have not tried the page, but you should be able to use it to check packages before uploading them to the marketplace or PRB.
The new PRB strategy is aimed at being faster and more responsive. However, to achieve that it will also need to be more brutally direct.
- Red thumbs will be used liberally whenever any observation is made that precludes approval (no more being kind and just leaving a thumb blank pending resolution).
- Green thumbs will only be given when a review heading is good enough for approval.
- The PRB will no longer work as an educational forum. If a developer/submission obviously needs significant assistance, then the submission will be rejected and the developer should use this 'Submit to Marketplace' forum to get whatever help they need before resubmitting. http://www.concrete5.org/community/forums/submitting-to-the-marketp...
- We have an approval-by-default time limit. Approval by default is in many ways a second-class level of approval. It will be much better for developers to address issues quickly and get real approval.
Each click is cumulative!
Following auto-approval, a submission currently remains accessible in the PRB, so it can gather further reviews and the seal of approval later.
But in at least one case nothing seems wrong at all.
Dojo Page Filter didn't fail any tests, the reviews are all good, and it's listed as compatible with the latest versions...
So I wonder, why was it pulled from the market?
I think in the case of Dojo Page Filter, it may have been superseded with Dojo Page Filter Pro.
I have sent a PM to Dojo to check.
1. Minimum version matches controller.php
This add-on is quite old and has earlier versions that support 5.4 as well. However, newer versions of the add-on no longer support 5.4 or older, for reasons we all know.
This test is now failing because the add-on's concrete5.org page is marked with a minimum version of 5.4.2, but the latest version's controller.php requires 5.5.0.
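As a rough illustration (this is a sketch, not the actual linter code, which isn't public), the consistency check described above could look something like the following. It assumes the `$appVersionRequired` property, which is how a concrete5 5.x package controller declares its minimum core version:

```python
import re


def parse_version(v):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(p) for p in v.split("."))


def min_version_matches(controller_source, marketplace_min):
    """Return True if the minimum version listed on the marketplace page is
    at least the $appVersionRequired declared in the package's controller.php."""
    m = re.search(r"\$appVersionRequired\s*=\s*['\"]([\d.]+)['\"]", controller_source)
    if not m:
        return False  # no declaration found; treat as a failure
    return parse_version(marketplace_min) >= parse_version(m.group(1))


# Hypothetical controller.php contents for the case described above:
controller = (
    "<?php class ExamplePackage extends Package {\n"
    "    protected $appVersionRequired = '5.5.0';\n"
    "}"
)

print(min_version_matches(controller, "5.4.2"))  # marketplace page says 5.4.2: fails
print(min_version_matches(controller, "5.5.0"))  # matching versions: passes
```

Tuple comparison handles the dotted version numbers correctly (e.g. 5.4.2 < 5.5.0), which a plain string comparison would not in general.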
2. Third-party libraries included in the checks
I'm not sure it's a good idea to fork each and every third-party library we include in the package. These tests are now failing because of an included third-party library:
- Don't use json_encode / json_decode functions
- Each PHP file contains exec or die statements
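To make the bundled-library problem concrete, here is a minimal sketch of what per-file checks like the two above might do; it is my own approximation, since the real linter rules aren't published. It assumes the standard concrete5 convention of starting every PHP file with `defined('C5_EXECUTE') or die("Access Denied.");`:

```python
import re

# Guard that concrete5 convention expects at the top of every PHP file.
GUARD = re.compile(r"defined\(\s*['\"]C5_EXECUTE['\"]\s*\)\s*or\s*(exec|die)")
# Direct json_encode / json_decode calls, which the linter flags.
JSON_FUNCS = re.compile(r"\bjson_(en|de)code\s*\(")


def check_file(php_source):
    """Return a list of rule names the given PHP source fails."""
    failures = []
    if not GUARD.search(php_source):
        failures.append("Each PHP file contains exec or die statements")
    if JSON_FUNCS.search(php_source):
        failures.append("Don't use json_encode / json_decode functions")
    return failures


# A bundled third-party library file typically has neither the guard nor any
# reason to avoid json_encode, so it fails both checks:
library_file = "<?php function encode($data) { return json_encode($data); }"
print(check_file(library_file))

# A file following the concrete5 conventions passes:
package_file = "<?php defined('C5_EXECUTE') or die('Access Denied.');"
print(check_file(package_file))
```

This shows why such checks fire on third-party code through no fault of the add-on itself: the library follows its own conventions, not concrete5's.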
Otherwise, it's really great to see these tests running against updated versions as well! I'm sure this will improve the quality of the add-ons even further. And on the other hand, it's good to see that these are not "show stoppers" blocking updates from being pushed to customers, e.g. for the reasons I've pointed out above.
Antti / Mainio
Really great to see the new PRB policy! I'm sure this will get the marketplace growing more quickly, and I hope it will improve quality as well, since the PRB now expects people to know this stuff prior to submitting and no longer acts as an educational board.
On the other hand, it would be great if there were a PRB-like reviewing system for novices, so they could get their code right and learn how to build better concrete5 packages. I know this forum already exists, but any code shared here is public for everyone to see.
By the way, where can we get the code of the linter to run the tests ourselves? I believe the blog post by @frz said you've open-sourced the code...? I see this repository on GitHub:
https://github.com/concrete5/prb_lint_tests...
But I don't see all the tests there. (or am I just going blind?)
If you post me a link to the add-on, I can approve linter exceptions so the new version becomes downloadable. Exceptions are sticky, so once you have them for an add-on, you shouldn't have problems in the future.
Developers can test against the linter as per
http://www.concrete5.org/community/forums/submitting-to-the-marketp...
And do we really need to get the updates re-approved? I might need to take some of my praising words back if that's the case...
I'll send you the link.
Hopefully, once a few exceptions are checked, the overhead of approving updates will disappear.
E.g. some people are still using PHP 5.2 because the core still works with 5.2 (fortunately, not for long), but we only test our add-ons with PHP 5.3+.
If you hadn't mentioned it and I hadn't posted to this thread, I would have thought (as I actually did) that the update had been automatically pushed to customers. There's nothing telling me these are show stoppers when the add-on is perfectly valid but the tests mentioned above failed for reasons outside the add-on's scope.
But on the other hand, I'm not sure this helps the underlying problem any more than the previous situation did. For example, in this particular case, if you marked the "Each PHP file contains exec or die statements" check to be bypassed as sticky (for all future versions), what prevents us from adding another file without the die statement to the package? In that case the PRB standards would no longer be followed, which makes the check basically useless if its goal is to ensure add-ons don't diverge from the standards.
I like that I can see if any tests fail when I update the add-on (although I would like to be able to run these tests on my local machine, if they were in the GitHub repo), but I don't think the middle-man process brings anything to the table once the add-on has already been approved.
I agree that it's a bit problematic to bypass the middle man if someone updates the add-on to a "totally revamped and more awesome v2.0" where everything has changed, but for minor updates and bug fixes I think this is a total waste of everyone's time and completely unnecessary. And as pointed out above, it does not add any more control over following the PRB standards.
I also PM'd frz with the points I raised here.
Running updates through the same process that new submissions go through makes a lot of sense to us. Historically, there was nothing keeping someone from submitting an add-on that was "hello world" at version 1 and "rm *.*" at version 2. To a customer coming to the marketplace and getting that add-on at version 2, we've completely failed on our basic promise of "it works and won't destroy your site". They wouldn't care that some version ages ago was tested.
As John pointed out, exceptions are sticky, so while we do expect some initial influx of traffic as things get grandfathered in, in the big picture we see this as adding a lot of value to both customers and developers.
There is a linter you can use to test. We didn't open source the existing tests because the last thing we need is a bunch of bright college students proving that our tests are imperfect in some way. We did open source the testing framework, because we'd love bright college students to think up additional tests and send them our way.
Hope that helps clarify.
But I still don't understand the Microsoft way of thinking behind keeping the tests closed when you've opened up the framework itself. I could understand this point of view if the whole linter were still closed source. So little software is perfect anyway, so why worry about that? And if there are actual ISSUES in the tests, I'm sure they will be fixed much more quickly if they are open source.
And yes, I know there's the linter tool at c5.org that has been mentioned a couple of times, but it would just make things easier if the tests could be run locally, as that would automate part of the add-on publishing process. I'm sure you yourselves have had at least some benefit from having the CI tests in place in the c5 GitHub repo, right?
*disregard, it's working now.